    The Effects of Halo Assembly Bias on Self-Calibration in Galaxy Cluster Surveys

    Self-calibration techniques for analyzing galaxy cluster counts utilize the abundance and the clustering amplitude of dark matter halos. These properties simultaneously constrain cosmological parameters and the cluster observable-mass relation. It was recently discovered that the clustering amplitude of halos depends not only on the halo mass but also on various secondary variables, such as the halo formation time and the concentration; these dependences are collectively termed assembly bias. Applying a modified Fisher matrix formalism, we explore whether these secondary variables have a significant impact on the study of dark energy properties using the self-calibration technique in current (SDSS) and near-future (DES, SPT, and LSST) cluster surveys. The impact of the secondary dependence is determined by (1) the scatter in the observable-mass relation and (2) the correlation between the observable and the secondary variables. We find that for optical surveys, the secondary dependence does not significantly influence an SDSS-like survey; however, it may affect a DES-like survey (given the high scatter currently expected from optical clusters) and an LSST-like survey (even for low scatter values and low correlations). For an SZ survey such as SPT, the impact of the secondary dependence is insignificant if the scatter is 20% or lower but can be enhanced by potentially high scatter values introduced by a highly correlated background. Accurate modeling of the assembly bias is necessary for cluster self-calibration in the era of precision cosmology. Comment: 13 pages, 5 figures, replaced to match published version
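
    The forecasting machinery referred to above can be sketched in a few lines. This is a generic Fisher-matrix forecast with invented toy derivatives and covariance, not the paper's actual survey model:

```python
import numpy as np

# Toy model: two observables (e.g. cluster counts and clustering amplitude)
# depending on two parameters. All numbers below are illustrative.
def fisher_matrix(derivs, cov):
    """F_ij = sum_ab (dO_a/dp_i) C^-1_ab (dO_b/dp_j)."""
    cinv = np.linalg.inv(cov)
    return derivs @ cinv @ derivs.T

# rows: parameters, cols: observables (hypothetical derivatives)
derivs = np.array([[10.0, 2.0],
                   [ 1.0, 5.0]])
cov = np.diag([4.0, 1.0])          # toy observable covariance

F = fisher_matrix(derivs, cov)
param_cov = np.linalg.inv(F)       # forecast parameter covariance
sigma_p0 = np.sqrt(param_cov[0, 0])  # forecast error on the first parameter
print(sigma_p0)
```

    Combining observables this way is what lets self-calibration break the degeneracy between cosmology and the observable-mass relation: each observable contributes its own derivative vector to the summed Fisher information.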

    Hybrid Probabilistic Trajectory Optimization Using Null-Space Exploration

    In the context of learning from demonstration, human examples are usually imitated in either Cartesian or joint space. However, this treatment might result in undesired movement trajectories in either space. This is particularly important for motion skills such as striking, which typically imposes motion constraints in both spaces. In order to address this issue, we consider a probabilistic formulation of dynamic movement primitives, and apply it to adapt trajectories in Cartesian and joint spaces simultaneously. The probabilistic treatment allows the robot to capture the variability of multiple demonstrations and facilitates the mixture of trajectory constraints from both spaces. In addition to this proposed hybrid space learning, the robot often needs to consider additional constraints such as motion smoothness and joint limits. On the basis of Jacobian-based inverse kinematics, we propose to exploit the robot null-space so as to unify trajectory constraints from Cartesian and joint spaces while satisfying additional constraints. Evaluations of hand-shaking and striking tasks carried out with a humanoid robot demonstrate the applicability of our approach.
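
    The null-space mechanism this abstract builds on is standard redundancy resolution. A minimal sketch with a made-up 2D-task, 3-joint Jacobian and secondary objective (not the paper's robot model):

```python
import numpy as np

# Jacobian-based null-space resolution: track a Cartesian velocity exactly
# while pushing joint-space preferences through the projector (I - J^+ J).
def nullspace_step(J, dx_task, dq_secondary):
    J_pinv = np.linalg.pinv(J)
    N = np.eye(J.shape[1]) - J_pinv @ J     # null-space projector
    return J_pinv @ dx_task + N @ dq_secondary

J = np.array([[1.0, 0.5, 0.0],
              [0.0, 1.0, 1.0]])             # 2D task, 3 joints (redundant)
dx = np.array([0.2, -0.1])                  # desired Cartesian velocity
dq_pref = np.array([0.0, 0.3, -0.3])        # e.g. drift away from joint limits

dq = nullspace_step(J, dx, dq_pref)
# The Cartesian task is reproduced exactly despite the secondary objective:
print(np.allclose(J @ dq, dx))
```

    Because J N = 0 for a full-row-rank Jacobian, any joint-space constraint routed through N cannot disturb the Cartesian trajectory, which is what allows constraints from both spaces to coexist.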

    Kernelized movement primitives

    Imitation learning has been studied widely as a convenient way to transfer human skills to robots. This learning approach is aimed at extracting relevant motion patterns from human demonstrations and subsequently applying these patterns to different situations. Despite the many advancements that have been achieved, solutions for coping with unpredicted situations (e.g., obstacles and external perturbations) and high-dimensional inputs are still largely absent. In this paper, we propose a novel kernelized movement primitive (KMP), which allows the robot to adapt the learned motor skills and fulfill a variety of additional constraints arising over the course of a task. Specifically, KMP is capable of learning trajectories associated with high-dimensional inputs owing to the kernel treatment, which in turn renders a model with fewer open parameters in contrast to methods that rely on basis functions. Moreover, we extend our approach by exploiting local trajectory representations in different coordinate systems that describe the task at hand, endowing KMP with reliable extrapolation capabilities in broader domains. We apply KMP to the learning of time-driven trajectories as a special case, where a compact parametric representation describing a trajectory and its first-order derivative is utilized. In order to verify the effectiveness of our method, several examples of trajectory modulations and extrapolations associated with time inputs, as well as trajectory adaptations with high-dimensional inputs, are provided.
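
    At its core, the kernel treatment described above is kernel regression over trajectory inputs. A stripped-down analogue for a time-driven 1D trajectory, with invented demonstration data and hyperparameters (not the KMP formulation itself, which also models covariances):

```python
import numpy as np

# Kernel ridge regression: predict a trajectory point from a time input
# without choosing explicit basis functions.
def rbf(a, b, ell=0.1):
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

t_demo = np.linspace(0.0, 1.0, 20)       # time inputs
y_demo = np.sin(2 * np.pi * t_demo)      # toy demonstrated trajectory
lam = 1e-6                               # regularization

K = rbf(t_demo, t_demo)
alpha = np.linalg.solve(K + lam * np.eye(len(t_demo)), y_demo)

def predict(t_query):
    return float(rbf(np.atleast_1d(t_query), t_demo) @ alpha)

print(predict(0.25))   # close to sin(pi/2) = 1 for this toy demo
```

    The number of open parameters is set by the demonstrations (one weight per sample) rather than by a hand-chosen basis, which is what makes the kernel route attractive for high-dimensional inputs.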

    Generalized Task-Parameterized Skill Learning

    Programming by demonstration has recently gained much attention due to its user-friendly and natural way to transfer human skills to robots. In order to facilitate the learning of multiple demonstrations and meanwhile generalize to new situations, a task-parameterized Gaussian mixture model (TP-GMM) has been recently developed. This model has achieved reliable performance in areas such as human-robot collaboration and dual-arm manipulation. However, the crucial task frames and associated parameters in this learning framework are often set by the human teacher, which raises three problems that have not been addressed yet: (i) task frames are treated equally, without considering their individual importance, (ii) task parameters are defined without taking into account additional task constraints, such as robot joint limits and motion smoothness, and (iii) a fixed number of task frames is pre-defined regardless of whether some of them may be redundant or even irrelevant for the task at hand. In this paper, we generalize task-parameterized learning by addressing the aforementioned problems. Moreover, we provide a novel learning perspective which allows the robot to refine and adapt previously learned skills in a low-dimensional space. Several examples are studied in both simulated and real robotic systems, showing the applicability of our approach.
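
    A TP-GMM fuses predictions expressed in several task frames; in the standard formulation this reduces to a product of Gaussians, where precisions add. A toy sketch with invented frame means and covariances:

```python
import numpy as np

# Product of Gaussians: each task frame contributes a prediction weighted
# by its (inverse-covariance) confidence. Frames that are very certain in
# one direction dominate the fused estimate along that direction.
def product_of_gaussians(means, covs):
    prec = [np.linalg.inv(c) for c in covs]
    cov = np.linalg.inv(sum(prec))
    mean = cov @ sum(p @ m for p, m in zip(prec, means))
    return mean, cov

m1, c1 = np.array([0.0, 0.0]), np.diag([0.01, 1.0])   # frame 1: sure in x
m2, c2 = np.array([1.0, 1.0]), np.diag([1.0, 0.01])   # frame 2: sure in y
mean, cov = product_of_gaussians([m1, m2], [c1, c2])
print(mean)   # x pulled toward frame 1, y toward frame 2
```

    Problem (i) in the abstract corresponds to all frames entering this product with their raw covariances; frame-importance weighting would rescale those precisions before fusing.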

    An Uncertainty-Aware Minimal Intervention Control Strategy Learned from Demonstrations

    Motivated by the desire to have robots physically present in human environments, in recent years we have witnessed an emergence of different approaches for learning active compliance. Some of the most compelling solutions exploit a minimal intervention control principle, correcting deviations from a goal only when necessary, and among those that follow this concept, several probabilistic techniques have stood out from the rest. However, these approaches are prone to requiring several task demonstrations for proper gain estimation and to generating unpredictable robot motions in the face of uncertainty. Here we present a Programming by Demonstration approach for uncertainty-aware impedance regulation, aimed at making the robot compliant - and safe to interact with - when the uncertainty about its predicted actions is high. Moreover, we propose a data-efficient strategy, based on the energy observed during demonstrations, to achieve minimal intervention control when the uncertainty is low. The approach is validated in an experimental scenario, where a human collaboratively moves an object with a 7-DoF torque-controlled robot.
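
    One common way to realize uncertainty-aware compliance is to map predictive uncertainty to an impedance gain, so the robot is stiff when confident and soft when unsure. The mapping and constants below are purely illustrative, not the paper's estimator:

```python
import numpy as np

# Map predictive standard deviation sigma to a stiffness gain in
# [k_min, k_max]: high uncertainty -> low stiffness -> safe interaction.
# All constants here are invented for illustration.
def stiffness(sigma, k_max=500.0, k_min=50.0, scale=0.05):
    return k_min + (k_max - k_min) * np.exp(-(sigma / scale) ** 2)

print(stiffness(0.0))   # confident: full stiffness, 500.0
print(stiffness(0.5))   # very uncertain: approaches k_min = 50.0
```

    In an impedance controller this gain would multiply the tracking error, so large uncertainty automatically weakens the corrective force, which is the "compliant when unsure" behavior the abstract describes.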

    Non-parametric Imitation Learning of Robot Motor Skills

    Unstructured environments impose several challenges when robots are required to perform different tasks and adapt to unseen situations. In this context, a relevant problem arises: how can robots learn to perform various tasks and adapt to different conditions? A potential solution is to endow robots with learning capabilities. In this line, imitation learning emerges as an intuitive way to teach robots different motor skills. This learning approach typically mimics human demonstrations by extracting invariant motion patterns and subsequently applies these patterns to new situations. In this paper, we propose a novel kernel treatment of imitation learning, which endows the robot with imitative and adaptive capabilities. In particular, due to the kernel treatment, the proposed approach is capable of learning human skills associated with high-dimensional inputs. Furthermore, we study a new concept of correlation-adaptive imitation learning, which allows for the adaptation of correlations exhibited in high-dimensional demonstrated skills. Several toy examples and a collaborative task with a real robot are provided to verify the effectiveness of our approach.

    The Aemulus Project III: Emulation of the Galaxy Correlation Function

    Using the N-body simulations of the AEMULUS Project, we construct an emulator for the non-linear clustering of galaxies in real and redshift space. We construct our model of galaxy bias using the halo occupation framework, accounting for possible velocity bias. The model includes 15 parameters, including both cosmological and galaxy bias parameters. We demonstrate that our emulator achieves ~ 1% precision at the scales of interest, 0.1 < r < 10 h^{-1} Mpc, and recovers the true cosmology when tested against independent simulations. Our primary parameters of interest are related to the growth rate of structure, f, and its degenerate combination fsigma_8. Using this emulator, we show that the constraining power on these parameters monotonically increases as smaller scales are included in the analysis, all the way down to 0.1 h^{-1} Mpc. For a BOSS-like survey, the constraints on fsigma_8 from r < 30 h^{-1} Mpc scales alone are more than a factor of two tighter than those from the fiducial BOSS analysis of redshift-space clustering using perturbation theory at larger scales. The combination of real- and redshift-space clustering allows us to break the degeneracy between f and sigma_8, yielding a 9% constraint on f alone for a BOSS-like analysis. The current AEMULUS simulations limit this model to surveys of massive galaxies. Future simulations will allow this framework to be extended to all galaxy target types, including emission-line galaxies. Comment: 14 pages, 8 figures, 1 table; submitted to ApJ; the project webpage is available at https://aemulusproject.github.io ; typo in Figure 7 and caption updated, results unchanged
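
    An emulator in this sense replaces expensive N-body evaluations with a fast interpolator over cosmological parameters. A one-dimensional toy stand-in using RBF (Gaussian-process-style) interpolation; the target function and design points are invented:

```python
import numpy as np

# Fit an interpolator to "simulation" outputs at a few design points,
# then query it cheaply anywhere in parameter space.
def rbf(a, b, ell=0.1):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

theta_train = np.linspace(0.2, 0.4, 8)        # e.g. Omega_m design points
y_train = np.log(1 + 25 * theta_train ** 2)   # stand-in for a simulated statistic

w = np.linalg.solve(rbf(theta_train, theta_train)
                    + 1e-8 * np.eye(len(theta_train)), y_train)

def emulate(theta):
    """Predict the 'simulation' output at a new parameter value."""
    return float(rbf(np.atleast_1d(theta), theta_train) @ w)

# Query between design points; compare against direct evaluation.
print(emulate(0.31), np.log(1 + 25 * 0.31 ** 2))
```

    The real emulators interpolate over a 15-dimensional parameter space and must control interpolation error below the ~1% target quoted above, which is why the training design and validation simulations matter so much.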

    The Aemulus Project I: Numerical Simulations for Precision Cosmology

    The rapidly growing statistical precision of galaxy surveys has led to a need for ever-more precise predictions of the observables used to constrain cosmological and galaxy formation models. The primary avenue through which such predictions will be obtained is suites of numerical simulations. These simulations must span the relevant model parameter spaces, be large enough to obtain the precision demanded by upcoming data, and be thoroughly validated in order to ensure accuracy. In this paper we present one such suite of simulations, forming the basis for the AEMULUS Project, a collaboration devoted to precision emulation of galaxy survey observables. We have run a set of 75 (1.05 h^-1 Gpc)^3 simulations with mass resolution and force softening of 3.51\times 10^10 (Omega_m / 0.3) ~ h^-1 M_sun and 20 ~ h^-1 kpc respectively, in 47 different wCDM cosmologies spanning the range of parameter space allowed by the combination of recent Cosmic Microwave Background, Baryon Acoustic Oscillation and Type Ia Supernovae results. We present convergence tests of several observables including spherical overdensity halo mass functions, galaxy projected correlation functions, galaxy clustering in redshift space, and matter and halo correlation functions and power spectra. We show that these statistics are converged to 1% (2%) for halos with more than 500 (200) particles respectively, on scales of r > 200 ~ h^-1 kpc in real space or k ~ 3 h Mpc^-1 in harmonic space for z \le 1. We find that the dominant source of uncertainty comes from varying the particle loading of the simulations. This leads to large systematic errors for statistics using halos with fewer than 200 particles and scales smaller than k ~ 4 h Mpc^-1. We provide the halo catalogs and snapshots detailed in this work to the community at https://AemulusProject.github.io. Comment: 16 pages, 12 figures, 3 tables. Project website: https://aemulusproject.github.io

    Cosmological Constraints from the Large Scale Weak Lensing of SDSS MaxBCG Clusters

    We derive constraints on the matter density Omega_m and the amplitude of matter clustering sigma_8 from measurements of large scale weak lensing (projected separation R = 5-30 h^-1 Mpc) by clusters in the Sloan Digital Sky Survey MaxBCG catalog. The weak lensing signal is proportional to the product of Omega_m and the cluster-mass correlation function xi_cm. With the relation between optical richness and cluster mass constrained by the observed cluster number counts, the predicted lensing signal increases with increasing Omega_m or sigma_8, with mild additional dependence on the assumed scatter between richness and mass. The dependence of the signal on scale and richness partly breaks the degeneracies among these parameters. We incorporate external priors on the richness-mass scatter from comparisons to X-ray data and on the shape of the matter power spectrum from galaxy clustering, and we test our adopted model for xi_cm against N-body simulations. Using a Bayesian approach with minimally restrictive priors, we find sigma_8 (Omega_m/0.325)^{0.501} = 0.828 +/- 0.049, with marginalized constraints of Omega_m = 0.325_{-0.067}^{+0.086} and sigma_8 = 0.828_{-0.097}^{+0.111}, consistent with constraints from other MaxBCG studies that use weak lensing measurements on small scales (R <= 2 h^-1 Mpc). The (Omega_m, sigma_8) constraint is consistent with and orthogonal to the one inferred from WMAP CMB data, reflecting agreement with the structure growth predicted by GR for an LCDM cosmological model. A joint constraint assuming LCDM yields Omega_m = 0.298 +/- 0.020 and sigma_8 = 0.831 +/- 0.020. Our cosmological parameter errors are dominated by the statistical uncertainties of the large scale weak lensing measurements, which should shrink sharply with current and future imaging surveys. Comment: 20 pages, 12 figures, submitted to MNRAS. For a brief video explaining the key result of this paper, see http://www.youtube.com/user/OSUAstronomy, or http://v.youku.com/v_show/id_XNDI3ODA3NzY4.html in countries where YouTube is not accessible.
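
    The headline result sigma_8 (Omega_m/0.325)^{0.501} = 0.828 +/- 0.049 defines a degeneracy curve rather than a point; for any chosen Omega_m, the implied sigma_8 follows directly (the numbers below are the paper's published best-fit values, the helper function is just an evaluation of that relation):

```python
# Evaluate sigma_8 along the published degeneracy direction
# sigma_8 * (Omega_m / 0.325)^0.501 = 0.828.
def sigma8_on_degeneracy(omega_m, amp=0.828, pivot=0.325, slope=0.501):
    return amp * (pivot / omega_m) ** slope

print(sigma8_on_degeneracy(0.325))  # at the pivot this returns the amplitude, 0.828
print(sigma8_on_degeneracy(0.298))  # sigma_8 implied at Omega_m = 0.298
```

    The pivot 0.325 is chosen so that the amplitude and slope are roughly uncorrelated, which is why the constraint is quoted in this product form.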

    The Aemulus Project II: Emulating the Halo Mass Function

    Existing models for the dependence of the halo mass function on cosmological parameters will become a limiting source of systematic uncertainty for cluster cosmology in the near future. We present a halo mass function emulator and demonstrate improved accuracy relative to state-of-the-art analytic models. In this work, mass is defined using an overdensity criterion of 200 relative to the mean background density. Our emulator is constructed from the AEMULUS simulations, a suite of 40 N-body simulations with snapshots from z=3 to z=0. These simulations cover the flat wCDM parameter space allowed by recent Cosmic Microwave Background, Baryon Acoustic Oscillation and Type Ia Supernovae results, varying the parameters w, Omega_m, Omega_b, sigma_8, N_{eff}, n_s, and H_0. We validate our emulator using five realizations of seven different cosmologies, for a total of 35 test simulations. These test simulations were not used in constructing the emulator, and were run with fully independent initial conditions. We use our test simulations to characterize the modeling uncertainty of the emulator, and introduce a novel way of marginalizing over the associated systematic uncertainty. We confirm non-universality in our halo mass function emulator as a function of both cosmological parameters and redshift. Our emulator achieves better than 1% precision over much of the relevant parameter space, and we demonstrate that the systematic uncertainty in our emulator will remain a negligible source of error for cluster abundance studies through at least the LSST Year 1 data set. Comment: https://aemulusproject.github.io